
Linked.Archi AI Governance Reference Data

Release: 2026-05-03

Modified on: 2026-05-03
This version:
https://meta.linked.archi/ai-governance/reference-data/0.1.0#
Revision:
0.1.0
Authors:
Kalin Maldzhanski
Publisher:
Linked.Archi
License:
http://creativecommons.org/licenses/by/4.0/
Visualization:
Visualize with WebVowl
Cite as:
Kalin Maldzhanski. Linked.Archi AI Governance Reference Data. Revision: 0.1.0. Retrieved from: https://meta.linked.archi/ai-governance/reference-data/0.1.0#
Provenance of this page
draft

Linked.Archi AI Governance Reference Data: Overview

This ontology defines the following named individuals.

Named Individuals

Linked.Archi AI Governance Reference Data: Description

Reference data for AI governance — EU AI Act risk levels, OECD AI Principles, human oversight modes, and assessment statuses.

Cross-reference for Linked.Archi AI Governance Reference Data classes, object properties and data properties

This section provides details for each named individual defined by Linked.Archi AI Governance Reference Data.
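All individuals below share the namespace `https://meta.linked.archi/ai-governance/reference-data#`; each IRI in this section is that namespace plus a local name. A minimal sketch of resolving local names against the namespace (the `iri` helper is ours for illustration, not part of the ontology):

```python
# Base namespace shared by every named individual in this reference data set.
AIGOV = "https://meta.linked.archi/ai-governance/reference-data#"

def iri(local_name: str) -> str:
    """Resolve a local name (e.g. 'HighRisk') to its full IRI."""
    return AIGOV + local_name

# Reconstructs the IRIs listed in the entries below:
print(iri("HighRisk"))
# → https://meta.linked.archi/ai-governance/reference-data#HighRisk
```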

Named Individuals

Accountability (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#AccountabilityPrinciple

Organizations and individuals developing, deploying, or operating AI systems should be accountable for their proper functioning in line with applicable regulations and ethical principles.
belongs to: AI principle (c)

Completed (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#AssessmentCompleted

Assessment has been completed and results are available.
belongs to: engagement status (c)

Fairness (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#FairnessPrinciple

AI systems should be designed and operated to avoid unfair bias and discrimination. Includes equitable treatment across demographic groups and protected attributes.
belongs to: AI principle (c)

High Risk (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#HighRisk

AI systems listed in Annex III of the EU AI Act that pose significant risks to health, safety, or fundamental rights. Includes AI systems used in: critical infrastructure, education, employment, essential services, law enforcement, migration and border control, and administration of justice. Subject to conformity assessment, risk management, data governance, transparency, human oversight, and accuracy requirements.
belongs to: risk level (c)

Human Agency & Oversight (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#HumanAgencyPrinciple

AI systems should support human agency and fundamental rights. They should be designed to allow appropriate human oversight, including the ability to understand, intervene in, and override AI decisions.
belongs to: AI principle (c)

Human-in-Command (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#HumanInCommand

A human sets the objectives, constraints, and boundaries within which the AI system operates autonomously. The human controls the overall system behavior but does not monitor individual decisions.
belongs to: oversight mode (c)

Human-in-the-Loop (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#HumanInTheLoop

A human reviews and approves every AI decision before it is executed. The AI system provides recommendations; the human makes the final decision. Highest level of human control.
belongs to: oversight mode (c)

Human-on-the-Loop (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#HumanOnTheLoop

The AI system operates autonomously but a human monitors its decisions and can intervene when anomalies are detected. The human does not approve every decision but maintains oversight.
belongs to: oversight mode (c)
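The three oversight modes differ mainly in when a human acts: on every decision (in-the-loop), on detected anomalies (on-the-loop), or only at the level of objectives and boundaries (in-command). A sketch of that distinction under our reading of the definitions above (the enum and the `approval_required` rule are illustrative, not part of the ontology):

```python
from enum import Enum

class OversightMode(Enum):
    """The three human oversight modes defined by this reference data."""
    HUMAN_IN_THE_LOOP = "HumanInTheLoop"  # human approves every decision
    HUMAN_ON_THE_LOOP = "HumanOnTheLoop"  # human monitors, intervenes on anomalies
    HUMAN_IN_COMMAND = "HumanInCommand"   # human sets objectives and boundaries only

def approval_required(mode: OversightMode, anomaly_detected: bool = False) -> bool:
    """Illustrative rule: does a human need to act on this individual decision?"""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return True                  # every decision is reviewed before execution
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return anomaly_detected      # only anomalous decisions trigger intervention
    return False                     # in-command: no per-decision involvement
```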

In Progress (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#AssessmentInProgress

Assessment is currently being conducted.
belongs to: engagement status (c)

Limited Risk (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#LimitedRisk

AI systems with specific transparency obligations under the EU AI Act. Includes AI systems that interact with natural persons (chatbots), generate or manipulate content (deepfakes), and emotion recognition or biometric categorization systems. Must disclose AI involvement to users.
belongs to: risk level (c)

Minimal Risk (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#MinimalRisk

AI systems with no specific regulatory obligations under the EU AI Act. Includes spam filters, AI-enabled video games, and inventory management systems. Voluntary codes of conduct may apply.
belongs to: risk level (c)

Overdue (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#AssessmentOverdue

Assessment is past its scheduled date and has not been completed.
belongs to: engagement status (c)

Planned (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#AssessmentPlanned

Assessment has been scheduled but not yet started.
belongs to: engagement status (c)
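Taken together, the four assessment statuses describe a simple lifecycle: an assessment is planned, moves into progress, and either completes or becomes overdue once its scheduled date passes. An illustrative sketch of that lifecycle (the date-based rule is our reading of the definitions; it is not stated by the ontology):

```python
from datetime import date

# Local names of the four assessment statuses defined in this document.
PLANNED = "AssessmentPlanned"
IN_PROGRESS = "AssessmentInProgress"
COMPLETED = "AssessmentCompleted"
OVERDUE = "AssessmentOverdue"

def effective_status(recorded: str, scheduled: date, today: date) -> str:
    """Illustrative rule: an assessment that is not completed counts as
    overdue once today is past its scheduled date."""
    if recorded != COMPLETED and today > scheduled:
        return OVERDUE
    return recorded
```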

Privacy & Data Governance (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#PrivacyPrinciple

AI systems should respect privacy and data protection throughout their lifecycle. Data used for training and operation should be collected, used, and stored in compliance with applicable data protection regulations (GDPR, CCPA).
belongs to: AI principle (c)

Safety & Robustness (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#SafetyPrinciple

AI systems should be safe, secure, and robust throughout their lifecycle. They should be resilient against attempts to alter their use or performance, and should function appropriately under adversarial conditions.
belongs to: AI principle (c)

Societal & Environmental Wellbeing (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#SocialWellbeingPrinciple

AI systems should benefit people and the planet. Their broader societal and environmental impact should be considered, including sustainability, social impact, and democratic values.
belongs to: AI principle (c)

Transparency (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#TransparencyPrinciple

AI systems should operate transparently — stakeholders should be informed when they are interacting with AI, and meaningful information about the system's logic, capabilities, and limitations should be available.
belongs to: AI principle (c)

Unacceptable Risk (ni)

IRI: https://meta.linked.archi/ai-governance/reference-data#UnacceptableRisk

AI practices that are prohibited under the EU AI Act (Article 5). Includes social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions), exploitation of vulnerabilities of specific groups, and subliminal manipulation.
belongs to: risk level (c)
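The four risk levels form the EU AI Act's tiered model, from prohibited practices down to systems with no specific obligations. A compact summary of that ordering as data (the dictionary and the `stricter` helper are ours; the obligations are paraphrased from the entries above):

```python
# EU AI Act risk tiers as defined in this reference data,
# in insertion order from most to least restrictive.
RISK_LEVELS = {
    "UnacceptableRisk": "prohibited under Article 5",
    "HighRisk": "conformity assessment, risk management, human oversight (Annex III)",
    "LimitedRisk": "transparency obligations (disclose AI involvement)",
    "MinimalRisk": "no specific obligations; voluntary codes of conduct",
}

def stricter(a: str, b: str) -> str:
    """Illustrative helper: return whichever of two risk levels is more restrictive."""
    order = list(RISK_LEVELS)  # insertion order encodes restrictiveness
    return a if order.index(a) < order.index(b) else b
```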

Legend

ni: Named Individuals
c: Classes

Acknowledgments

The authors would like to thank Silvio Peroni for developing LODE, the Live OWL Documentation Environment used to render the cross-reference section of this document, and Daniel Garijo for developing Widoco, the program used to create the template for this documentation.